00100 EXPLANATIONS AND MODELS
00200 The Nature of Explanation
00300 It is perhaps as difficult to explain explanation itself as
00400 it is to explain anything else. (Nothing, except everything, explains
00500 anything). The explanatory practices of different sciences differ
00600 widely but they all share the purpose of someone attempting to answer
00700 someone else's (or his own) why-how-what-etc. questions about a
00800 situation, event, episode, object or phenomenon. Thus explanation
00900 implies a dialogue whose participants share some interests, beliefs,
01000 and values. A consensus must exist about what are admissible and
01100 appropriate questions and answers. The participants must agree on
01200 what is a sound and reasonable question and what is a relevant,
01300 intelligible, and (believed) correct answer. The explainer tries to
01400 satisfy a questioner's curiosity by making comprehensible why
01500 something is the way it is. The answer may be a definition, an
01600 example, a synonym, a story, a theory, a model-description, etc. The
01700 answer attempts to satisfy curiosity by settling belief. A scientific
01800 explanation aims at convergence of belief in the relevant expert
01900 community.
02000 Suppose a man dies and a questioner (Q) asks an explainer (E):
02100 Q: Why did the man die?
02200 One answer might be:
02300 E: Because he took cyanide.
02400 This explanation might be sufficient to satisfy Q's curiosity, and he
02500 stops asking further questions. Or he might continue:
02600 Q: Why did the cyanide kill him?
02700 and E replies:
02800 E: Anyone who ingests cyanide dies.
02900 This explanation appeals to a universal generalization under which is
03000 subsumed the particular fact of this man's death. Subsumptive
03100 explanations satisfy some questioners but not others who, for
03200 example, might want to know about the physiological mechanisms
03300 involved.
03400 Q: How does cyanide work in causing death?
03500 E: It stops respiration so the person dies from lack of oxygen.
03600 If Q has biochemical interests he might inquire further:
03700 Q: What is cyanide's mechanism of drug action on the
03800 respiratory center?
03900 The last two questions refer to causes. When human action is
04000 to be explained, confusion easily arises between appealing to
04100 physical, mechanical causes and appealing to symbolic-level reasons,
04200 that is, learned, acquired procedures or strategies which seem to be
04300 of a different ontological order. (See Toulmin, 1971).
04400 It is established clinical knowledge that the phenomena of
04500 the paranoid mode can be found associated with a variety of physical
04600 disorders. For example, paranoid thinking can be found in patients
04700 with head injuries, hyperthyroidism, hypothyroidism, uremia,
04800 pernicious anemia, cerebral arteriosclerosis, congestive heart
04900 failure, malaria and epilepsy. Also drug intoxications due to
05000 alcohol, amphetamines, marihuana and LSD can be accompanied by the
05100 paranoid mode. In these cases the paranoid mode is not a first-order
05200 disorder but a way of processing information in reaction to some
05300 other underlying disorder. To account for the association of paranoid
05400 thought with these physical states of illness, a psychological
05500 theorist might be tempted to hypothesize that a purposive cognitive
05600 system would attempt to explain ill health by attributing it to other
05700 malevolent human agents. But before making such an explanatory move,
05800 we must consider the at-times elusive distinction between reasons and
05900 causes in explanations of human behavior.
06000 One view of the association of the paranoid mode with
06100 physical disorders might be that the physical illness simply causes
06200 the paranoia, through some unknown mechanism, at a physical level
06300 beyond the influence of deliberate self-direction and self-control.
06400 That is, the resultant paranoid mode represents something that
06500 happens to a person as victim, not something that he does as an
06600 active agent. Mechanical causes thus provide one type of reason in
06700 explaining behavior. Another view is that the paranoid mode can be
06800 explained in terms of symbolically represented reasons consisting of
06900 rules and patterns of rules which specify an agent's intentions and
07000 beliefs. In a given situation does a person as an agent recognize,
07100 monitor and control what he is doing or trying to do? Or does it
07200 just happen to him automatically without conscious deliberation?
07300 This question raises a third view, namely that unrecognized
07400 reasons, aspects of the symbolic representation which are sealed off
07500 and inaccessible to voluntary control, can function like causes. If
07600 they can be brought to consciousness, such reasons can sometimes be
07700 modified voluntarily by the agent, as a language user, by reflexively
07800 talking to and instructing himself. This second-order monitoring and
07900 control through language contrasts with an agent's inability to
08000 modify mechanical causes or symbolic reasons which lie beyond the
08100 influence of self-criticism and self-emancipation carried out through
08200 linguistically mediated argumentation. Timeworn conundrums about
08300 concepts of free-will, determinism, responsibility, consciousness and
08400 the powers of mental action here plague us unless we can take
08500 advantage of a computer analogy in which a clear and useful
08600 distinction is drawn between levels of mechanical hardware and
08700 symbolically represented programs. This important distinction will be
08800 elaborated on shortly.
08900
09000 Each of these three views provides a serviceable perspective
09100 depending on how a disorder is to be explained and corrected. When
09200 paranoid processes occur during amphetamine intoxication they can be
09300 viewed as biochemically caused and beyond the patient's ability to
09400 control volitionally through internal self-correcting dialogues with
09500 himself. When a paranoid moment occurs in a normal person, it can be
09600 viewed as involving a symbolic misinterpretation. If the paranoid
09700 misinterpretation is recognized as unjustified, a normal person has
09800 the emancipatory power to revise or reject it through internal
09900 debate. Between these extremes of drug-induced paranoid states and
10000 the self-correctible paranoid moments of the normal person, lie cases
10100 of paranoid personalities, paranoid reactions, and the paranoid mode
10200 associated with the major psychoses (schizophrenic and
10300 manic-depressive).
10400 One opinion has it that the major psychoses are a consequence
10500 of unknown physical causes and are beyond deliberate voluntary
10600 control. But what are we to conclude about paranoid personalities
10700 and paranoid reactions where no hardware disorder is detectable or
10800 suspected? Are such persons to be considered patients to whom
10900 something is mechanically happening at the physical level or are they
11000 agents whose behavior is a consequence of what they do at the
11100 symbolic level? Or are they both agent and patient depending on
11200 how one views the self-modifiability of their symbolic processing?
11300 In these perplexing cases we shall take the position that in normal,
11400 neurotic and characterological paranoid modes, the psychopathology
11500 represents something that happens to a man as a consequence of what
11600 he has experientially undergone, of something he now does, and
11700 something he now undergoes. Thus he is both agent and victim whose
11800 symbolic processes have powers to do and liabilities to undergo.
11900 His liabilities are reflexive in that he is victim to, and can
12000 succumb to, his own symbolic structures.
12100
12200 From this standpoint I would postulate a duality at the
12300 symbolic level between reasons and causes. That is, a reason can
12400 operate as an unrecognized cause in one context and be offered as a
12500 recognized justification in another. It is, of course, not reasons
12600 themselves which operate as causes but the execution of the
12700 reason-rules which serves as a determinant of behavior. Human
12800 symbolic behavior is non-determinate to the extent that it is
12900 self-determinate. Thus the power to select among alternatives, to
13000 make some decisions freely and to change one's mind is non-illusory.
13100 When a reason is recognized to function as a cause and is accessible
13200 to self-monitoring (the monitoring of monitoring), emancipation from
13300 it can occur through change or rejection of belief. In this sense an
13400 at least two-levelled system is self-changeable and
13500 self-emancipatory, within limits.
13600 Explanations both in terms of causes and reasons can be
13700 indefinitely extended and endless questions can be asked at each
13800 level of analysis. Just as the participants in explanatory dialogues
13900 decide what is taken to be problematic, so they also determine the
14000 termini of questions and answers. Each discipline has its
14100 characteristic stopping points and boundaries.
14200 Underlying such explanatory dialogues are larger and smaller
14300 constellations of concepts which are taken for granted as
14400 nonproblematic background. Hence in considering the strategies of
14500 the paranoid mode "it goes without saying" that any living teleonomic
14600 system, as the larger constellation, strives for maintenance and
14700 expansion of life. Also it should go without saying that, at a lower
14800 level, ion transport takes place through nerve-cell membranes. Every
14900 function of an organism can be viewed a governing a subfunction
15000 beneath and depending on a transfunction above which calls it into
15100 play for a purpose.
15200 Just as there are many alternative ways of describing, there
15300 are many alternative ways of explaining. An explanation is geared to
15400 some level of what the dialogue participants take to be the
15500 fundamental structures and processes under consideration. Since in
15600 psychiatry we cope with patients' problems using mainly
15700 symbolic-conceptual techniques (it is true that the pill, the knife,
15800 and electricity are also available), we are interested in aspects of
15900 human conduct which can be explained, understood, and modified at a
16000 symbol-processing level. Psychiatrists need theoretical symbolic
16100 systems from which their clinical experience can be logically derived
16200 to interpret the case histories of their patients. Otherwise they are
16300 faced with mountains of indigestible data and dross. To quote Einstein:
16400 "Science is an attempt to make the chaotic diversity of our sense
16500 experience correspond to a logically uniform system of thought by
16600 correlating single experiences with the theoretic structure."
16700
16800 The Symbol Processing Viewpoint
16900
17000 Segments and sequences of human behavior can be studied from
17100 many perspectives. In this monograph I shall view sequences of
17200 paranoid symbolic behavior from an information processing standpoint
17300 in which persons are viewed as symbol users. For a more complete
17400 explication and justification of this perspective, see Newell (1973)
17500 and Newell and Simon (1972).
17600 In brief, from this vantage point we define information as
17700 knowledge in a symbolic code. Symbols are considered to be
17800 representations of experience classified as objects, events,
17900 situations and relations. A symbolic process is a symbol-manipulating
18000 activity posited to account for observable symbolic behavior such as
18100 linguistic interaction. Under the term "symbol-processing" I include
18200 the seeking, manipulating and generating of symbols.
18300 Symbol-processing explanations postulate an underlying
18400 structure of hypothetical processes, functions, strategies, or
18500 directed symbol-manipulating procedures, having the power to produce
18600 and being responsible for observable patterns of phenomena. Such a
18700 structure offers an ethogenic (ethos = conduct or character, genic =
18800 generating) explanation for sequences or segments of symbolic
19000 behavior. (See Harre and Secord, 1972). From an ethogenic viewpoint,
19000 we can posit processes, functions, procedures and strategies as being
19100 responsible for and having the power to generate the symbolic
19200 patterns and sequences characteristic of the paranoid mode.
19300 "Strategies" is perhaps the best general term since it implies ways
19400 of obtaining an objective - ways which have suppleness and pliability
19500 since choice of application depends on circumstances. However
19600 I shall use all these terms interchangeably.
19700
19800 Symbolic Models
19900 Theories and models share many functions and are often
20000 considered equivalent. One important distinction lies in the fact
20100 that a theory states that a subject has a certain structure but does not
20200 exhibit that structure in itself. (See Kaplan, 1964). In the case of
20300 computer simulation models there exists a further useful distinction.
20400 Computer simulation models which have the ability to converse in
20500 natural language using teletypes, actualize or realize a theory in
20600 the form of a dialogue algorithm. In contrast to a verbal, pictorial
20700 or mathematical representation, such a model, as a result of
20800 interaction, changes its states over time and ends up in a state
20900 different from its initial state.
21000 Einstein once remarked, in contrasting the act of description
21100 with what is described, that it is not the function of science to
21200 give the taste of the soup. Today this view would be considered
21300 unnecessarily restrictive. For example, a major test for synthetic
21400 insulin is whether it reproduces the effects, or at least some of the
21500 effects (such as lowering blood sugar), shown by natural insulin.
21600 To test whether a simulation is successful, its effects must be
21700 compared with the effects produced by the naturally occurring
21800 subject-process being modelled. An interactive simulation model
21900 which attempts to reproduce sequences of experienceable reality
22000 offers an interviewer a first-hand experience with a concrete case.
22100 In constructing a
22200 computer simulation, a theory is modelled to discover a sufficiently
22300 rich structure of hypotheses and assumptions to generate the
22400 observable subject-behavior under study. A dialogue algorithm
22500 allows an observer to interact with a concrete specimen of a class in
22600 detail. In the case of our model, the level of detail is the level of
22700 the symbolic behavior of conversational language. This level is
22800 satisfying to a clinician since he can compare the model's behavior
22900 with its natural human counterparts using familiar skills of clinical
23000 dialogue. Communicating with the paranoid model by means of teletype,
23100 an interviewer can directly experience for himself a sample of the
23200 type of impaired social relationship which develops with someone in
23300 the paranoid mode.
23400 An algorithm composed of symbolic computational procedures
23500 converts input symbolic structures into output symbolic structures
23600 according to certain principles. The modus operandi of such a
23700 symbolic model is simply the workings of an algorithm when run on a
23800 computer. At this level of explanation, to answer `why?' means to
23900 provide an algorithm which makes explicit how symbolic structures
24000 collaborate, interplay and interlock - in short, how they are
24100 organized to generate patterns of manifest phenomena.
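
As an illustrative sketch only (written here in Python for convenience, and
not drawn from the paranoid model itself, whose procedures are described in
later chapters), the fragment below shows what is meant by an algorithm that
converts input symbolic structures into output symbolic structures while its
internal state changes over the course of an exchange. The variable
"mistrust" and the single pattern rule are hypothetical.

# A toy illustration, not the paranoid model itself: an algorithm that
# maps an input symbol structure (a list of words) onto an output symbol
# structure (a reply) and whose internal state changes with each exchange.

def make_state():
    return {"mistrust": 0.2}          # hypothetical internal state variable

def reply(state, words):
    """Convert an input list of word-symbols into an output list."""
    if "POLICE" in words:             # a single hypothetical pattern rule
        state["mistrust"] = state["mistrust"] + 0.3
        return ["WHY", "DO", "YOU", "ASK", "ABOUT", "THE", "POLICE"]
    if state["mistrust"] > 0.4:
        return ["I", "DONT", "TRUST", "YOUR", "QUESTIONS"]
    return ["GO", "ON"]

state = make_state()
print(reply(state, ["HAVE", "YOU", "SEEN", "THE", "POLICE"]))
print(reply(state, ["HOW", "DO", "YOU", "FEEL", "TODAY"]))
# The second reply differs from what it would have been at the start,
# because the first input changed the internal state of the model.
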
24200
24300 To simulate the sequential input-output behavior of a system
24400 using symbolic computational procedures, one writes an algorithm
24500 which, when run on a computer, produces symbolic behavior resembling
24600 that of the subject system being simulated (Colby, 1973). The
24700 resemblance is achieved through the workings of an inner posited
24800 structure in the form of an algorithm, an organization of
24900 symbol-manipulating procedures which are ethogenically responsible
25000 for the characteristic observable behavior at the input-output level.
25100 Since we do not know the structure of the "real" simulative processes
25200 used by the mind-brain, our posited structure stands as an imagined
25300 theoretical analogue, a possible and plausible organization of
25400 processes analogous to the unknown processes and serving as an
25500 attempt to explain the workings of the system under study. A
25600 simulation model is thus deeper than a pure black-box explanation
25700 because it postulates functionally equivalent processes inside the
25800 box to account for outwardly observable patterns of behavior. A
25900 simulation model constitutes an interpretive explanation in that it
26000 makes intelligible the connections between external input, internal
26100 states and output by positing intervening symbol-processing
26200 procedures operating between symbolic input and symbolic output. To
26300 be illuminating, a description of the model should make clear why and
26400 how it reacts as it does under various circumstances.
26500 Citing a universal generalization to explain an individual's
26600 behavior is unsatisfactory to a questioner who is interested in what
26700 powers and liabilities are latent behind manifest phenomena. To say
26800 "x is nasty because x is paranoid and all paranoids are nasty" may be
26900 relevant, intelligible and correct. But another type of explanation
27000 is possible: a model-explanation referring to a structure which can
27100 account for "nasty" behavior as a consequence of input and internal
27200 states of a system. A model explanation specifies particular
27300 antecedents and processes through which antecedents generate the
27400 phenomena. An ethogenic approach to explanation assumes perceptible
27500 phenomena display the regularities and nonrandom irregularities they
27600 do because of the nature of an imperceptible and inaccessible
27700 underlying structure. The posited theoretical structure is an
27800 idealization, unobservable in human heads, not because it is too
27900 small, but because it is an imaginary analogue to the inaccessible
28000 structure.
28100 When attempts are made to explain human behavior, principles
28200 in addition to those accounting for the natural order are invoked.
28300 "Nature entertains no opinions about us", said Nietzsche. But human
28400 natures do, and therein lies a source of complexity for the
28500 understanding of human conduct. Until the first quarter of the 20th
28600 century, natural sciences were guided by the Newtonian ideal of
28700 perfect process knowledge about inanimate objects whose behavior
28800 could be subsumed under lawlike generalizations. When a deviation
28900 from a law was noticed, it was the law which was subsequently
29000 modified, since by definition physical objects did not have the power
29100 to break laws. When the planet Mercury was observed to deviate from
29200 the orbit predicted by Newtonian theory, no one accused the planet of
29300 being an intentional agent disobeying a law. Instead it was suspected
29400 that something was incorrect about the theory.
29500 Subsumptive explanation is the acceptable norm in many fields
29600 but it is seldom satisfactory in accounting for particular sequences
29700 of behavior in living purposive systems. When physical bodies fall
29800 in the macroscopic world, few find it scientifically useful to posit
29900 that bodies have an intention to fall. But in the case of living
30000 systems, especially ourselves, our ideal explanatory practice is
30100 teleonomically Aristotelian in utilizing a concept of intention.
30200 Consider a man participating in a high-diving contest. In
30300 falling towards the water he accelerates at the rate of 32 feet per
30400 second per second. Viewing the man simply as a falling body, we explain his rate
30500 of fall by appealing to a physical law. Viewing the man as a human
30600 intentionalistic agent, we explain his dive as the result of an
30700 intention to dive in a certain way in order to win the diving contest.
30800 His conduct (in contrast to mere movement) involves an intended
30900 following of certain conventional rules for what is judged by humans
31000 to constitute, say, a swan dive. Suppose part-way down he chooses to
31100 change his position in mid-air and enter the water thumbing his nose
31200 at the judges. He cannot disobey the law of falling bodies but he can
31300 disobey or ignore the rules of diving. He can also make a gesture
31400 which expresses disrespect and which he believes will be interpreted
31500 as such by the onlookers. Our diver breaks a rule for diving but
31600 follows another rule which prescribes gestural action for insulting
31700 behavior. To explain the actions of diving and nose-thumbing, we
31800 would appeal, not to laws of natural order, but to an additional
31900 order, to principles of human order. This order is superimposed on
32000 laws of natural order and takes into account (1) standards of
32100 appropriate action in certain situations and (2) the agent's inner
32200 considerations of intention, belief and value which he finds
32300 compelling from his point of view. In this type of explanation the
32400 explanandum, that which is being explained, is the agent's informed
32500 actions, not simply his movements. When a human agent performs an
32600 action in a situation, we can ask: is the action appropriate to that
32700 situation and if not, why did the agent believe his action to be
32800 called for?
32900 Symbol-processing explanations of human conduct rely on
33000 concepts of intention, belief, action, affect, etc. These terms are
33100 close to the terms of ordinary language as is characteristic of early
33200 stages of explanations. It is also important to note that such terms
33300 are commonly utilized in describing computer algorithms which follow
33400 rules in striving to achieve goals. In an algorithm these ordinary
33500 language terms can be explicitly defined and represented.
33600 Psychiatry deals with the practical concerns of inappropriate
33700 action, belief, etc. on the part of a patient. His behavior may be
33800 inappropriate to onlookers since it represents a lapse from the
33900 expected, a contravention of the human order. It may even appear this
34000 way to the patient in monitoring and directing himself. But
34100 sometimes, as in severe cases of the paranoid mode, the patient's
34200 behavior does not appear anomalous to himself. He maintains that
34300 anyone who understands his point of view, who conceptualizes
34400 situations as he does from the inside, would consider his outward
34500 behavior appropriate and justified. What he does not understand or
34600 accept is that his inner conceptualization is mistaken and represents
34700 a misinterpretation of the events of his experience.
34800 The model to be presented in the sequel constitutes an
34900 attempt to explain some regularities and particular occurrences of
35000 symbolic (conversational) paranoid behavior observable in the
35100 clinical situation of a psychiatric interview. The explanation is
35200 at the symbol-processing level of linguistically communicating agents
35300 and is cast in the form of a dialogue algorithm. Like all
35400 explanations, it is tentative, incomplete, and does not claim to
35500 represent the only conceivable structure of processes.
35600
35700 The Nature of Algorithms
35800
35900 Theories can be presented in various forms: prose essays,
36000 mathematical equations and computer programs. To date most
36100 theoretical explanations in psychiatry and psychology have consisted
36200 of natural language essays with all their well-known vagueness and
36300 ambiguities. Many of these formulations have been untestable, not
36400 because relevant observations were lacking but because it was unclear
36500 what the essay was really saying. Clarity is needed. Science may
36600 begin with metaphors but it should end up with algorithms.
36700 An alternative way of formulating psychological theories is
36800 now available in the form of symbol-processing algorithms, computer
36900 programs, which have the virtue of being explicit in their
37000 articulation and which can be run on a computer to test internal
37100 consistency and external correspondence with the data of observation.
37200 The subject-matter or subject of a model is what it is a model of;
37300 the source of a model is what it is based upon. Since we do not know
37400 the "real" algorithms used by people, we construct a theoretical
37500 model, based upon computer algorithms. This model represents a
37600 partial analogy. (Harre, 1970). The analogy is made at the symbol-
37700 processing level, not at the hardware level. A functional,
37800 computational or procedural equivalence is being postulated. The
37900 question then becomes one of categorizing the extent of the
38000 equivalence. A beginning (first-approximation) functional
38100 equivalence might be defined as indistinguishability at the level of
38200 observable I-O pairs. A stronger equivalence would consist of
38300 indistinguishability at inner I-O levels. That is, there exists a
38400 correspondence between what is being done and how it is being done at
38500 a given operational level.
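
As a small sketch (illustrative only), a test of the beginning,
first-approximation equivalence amounts to checking that the model reproduces
each recorded output for the corresponding input. The recorded pairs and the
stand-in lookup "model" below are invented for the purpose of illustration.

# A sketch of "beginning (first-approximation) functional equivalence":
# the model counts as indistinguishable from the subject at the outer
# level if it reproduces each recorded output for the matching input.
# The recorded pairs are invented, not actual interview data.

recorded_io_pairs = [
    (("HOW", "ARE", "YOU"), ("I", "AM", "ALL", "RIGHT")),
    (("WHY", "ARE", "YOU", "HERE"), ("I", "SHOULDNT", "BE", "HERE")),
]

def io_equivalent(model_fn, io_pairs):
    """Indistinguishability at the level of observable I-O pairs only."""
    return all(tuple(model_fn(inp)) == out for inp, out in io_pairs)

# A trivial stand-in "model" that merely looks each answer up.  It passes
# the outer-level test, yet shares none of the inner steps by which a
# person produces a reply; a stronger equivalence would compare those
# inner steps as well.
lookup_table = dict(recorded_io_pairs)

def model(words):
    return lookup_table[tuple(words)]

print(io_equivalent(model, recorded_io_pairs))   # prints: True
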
38600 An algorithm represents an organization of symbol-processing
38700 strategies or functions which constitute an "effective procedure". An
38800 effective procedure consists of three components:
38900
39000 (1) A programming language in which procedural rules of
39100 behavior can be rigorously and unambiguously specified.
39200 (2) An organization of procedural rules which constitute
39300 the algorithm.
39400 (3) A machine processor which can rapidly and reliably carry
39500 out the processes specified by the procedural rules.
39600 The specification of (2), written in the formally defined
39700 programming language of (1), is termed an algorithm or program,
39800 whereas (3) involves a computer as the machine processor, a set of
39900 deterministic physical mechanisms which can perform the operations
40000 specified in the algorithm. The algorithm is called `effective'
40100 because it actually works, performing as intended and producing the
40200 effects desired by the model builders when run on the machine
40300 processor.
40400 A simulation model is composed of procedures taken to be analogous
40500 to the imperceptible and inaccessible procedures of the system being
40600 modelled. We are not claiming they ARE analogous; we are MAKING them so. The
40700 analogy being drawn here is between specified processes and their
40800 generating systems. Thus, in comparing mental processes to
40900 computational processes, we might assert:
41000
41100            mental process                 computational process
41200         --------------------    ::    ---------------------------
41300         brain hardware and             computer hardware and
41400             programs                        programs
41500
41600 Many of the classical mind-brain problems arose because
41700 there did not exist a familiar, well-understood analogy to help
41800 people imagine how a system could work having a clear separation
41900 between its hardware descriptions and its program descriptions. With
42000 the advent of computers and programs some mind-brain perplexities
42100 disappear. (Colby, 1971). The analogy is not simply between computer
42200 hardware and brain wetware. We are not comparing the structure of
42300 neurons with the structure of transistors; we are comparing the
42400 organization of symbol-processing procedures in an algorithm with
42500 symbol-processing procedures of the mind-brain. The central nervous
42600 system contains a representation of the experience of its holder. A
42700 model builder has a conceptual representation of that representation
42800 which he demonstrates in the form of a model. Thus the model is a
42900 demonstration of a representation of a representation.
43000 An algorithm can be run on a computer in two forms, a
43100 compiled version and an interpreted version. In the compiled version
43200 a preliminary translation has been made from the higher-level
43300 programming language (source language) into lower-level machine
43400 language (object language) which controls the on-off state of
43500 hardware switching devices. When the compiled version is run, the
43600 instructions of the machine-language code are directly executed. In
43700 the interpreted version each high-level language instruction is first
43800 translated into machine language, executed, and then the process is
43900 repeated with the next instruction. One important aspect of the
44000 distinction between compiled and interpreted versions is that the
44100 compiled version, now written in machine language, is not easily
44200 accessible to change using the higher-level language. In order to
44300 change the program, it must be modified in the
44400 source language and then re-compiled into the object language. The
44500 rough analogy with ever-changing human symbolic behavior lies in
44600 suggesting that modifications require change at the source-language
44700 level. Otherwise compiled algorithms are inaccessible to second order
44800 monitoring and modification.
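
The following sketch (illustrative only; the rule contents are invented)
marks the distinction: the rule table plays the part of the source-language
version, which remains open to inspection and change, while the procedure
returned by compile_rules plays the part of the compiled version, which can
be altered only by editing the source rules and compiling again.

# Illustrative only: "source" rules kept as modifiable data, versus a
# "compiled" procedure that is no longer open to change from outside.

source_rules = {"GREETING": "HELLO", "FAREWELL": "GOODBYE"}   # invented rules

def compile_rules(rules):
    frozen = dict(rules)                  # fixed at "compile" time
    def compiled(symbol):
        return frozen.get(symbol, "NO RESPONSE")
    return compiled

program = compile_rules(source_rules)
print(program("GREETING"))                # HELLO

# Editing the source after compilation does not affect the compiled
# version; the change takes hold only when the rules are compiled again.
source_rules["GREETING"] = "WHAT DO YOU WANT"
print(program("GREETING"))                # still HELLO
program = compile_rules(source_rules)     # re-compile from the source
print(program("GREETING"))                # WHAT DO YOU WANT
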
44900 Since we are taking running computer programs as a source of
45000 analogy for a paranoid model, logical errors or pathological behavior
45100 on the part of such programs are of interest to the
45200 psychopathologist. These errors can be ascribed to the hardware
45300 level, to the interpreter or to the programs which the interpreter
45400 executes. Different remedies are required at different levels. If
45500 the analogy is to be clinically useful in the case of human
45600 pathological behavior, it will become a matter of influencing
45700 symbolic behavior with the appropriate techniques.
45800 Since the algorithm is written in a programming language, it
45900 is hermetic except to a few people, who in general do not enjoy
46000 reading other people's code. Hence the intelligibility and
46100 scrutability requirement for explanations must be met in other ways.
46200 In an attempt to open the algorithm to scrutiny I shall describe the
46300 model in detail using diagrams and interview examples profusely.
46400
46500
46600 Analogy
46700
46800 I have stated that an interactive simulation model of
46900 symbol-manipulating processes reproduces sequences of symbolic
47000 behavior at the level of linguistic communication. The reproduction
47100 is achieved through the operations of an algorithm consisting of an
47200 organization of hypothetical symbol-processing strategies or
47300 procedures which can generate the I-O behavior of the subject-
47400 processes under investigation. The algorithm is an "effective
47500 procedure" in the sense that it really works in the manner intended by the
47600 model-builders. In the model to be described, the paranoid algorithm
47700 generates linguistic I-O behavior typical of patients whose
47800 symbol-processing is dominated by the paranoid mode. Comparisons can
47900 be made between samples of the I-O behaviors of patients and the model.
48000 But the analogy is not to be drawn at this level. Mynah birds and
48100 tape recorders also reproduce human linguistic behavior but no one
48200 believes the reproduction is achieved by powers analogous to human
48300 powers. Given that the manifest outermost I-O behavior of the model
48400 is indistinguishable from the manifest outward I-O behavior of
48500 paranoid patients, does this imply that the hypothetical underlying
48600 processes used by the model are analogous to (or perhaps the same
48700 as?) the underlying processes used by persons in the paranoid mode?
48800 This deep and far-reaching question should be approached with caution
48900 and only when we are first armed with some clear notions about
49000 analogy, similarity, faithful reproduction, indistinguishability and
49100 functional equivalence.
49200 In comparing two things (objects, systems or processes) one
49300 can cite properties they have in common (positive analogy),
49400 properties they do not share (negative analogy) and properties which
49500 we do not yet know whether they are positive or negative (neutral
49600 analogy). (See Hesse, 1966). No two things are exactly alike in every
49700 detail. If they were identical in respect to all their properties
49800 then they would be copies. If they were identical in every respect
49900 including their spatio-temporal location we would say we have only
50000 one thing instead of two. Everything resembles something else and
50100 maybe everything else, depending upon how one cites properties.
50200 In an analogy a similarity relation is evoked. "Newton did
50300 not show the cause of the apple falling but he showed a similitude
50400 between the apple and the stars." (D'Arcy Thompson). Huygens suggested
50500 an analogy between sound waves and light waves in order to understand
50600 something less well-understood (light) in terms of something better
50700 understood (sound). To account for species variation, Darwin
50800 postulated a process of natural selection. He constructed an
50900 analogy from two sources, one from artificial selection as practiced
51000 by domestic breeders of animals and one from Malthus' theory of a
51100 competition for existence in a population increasing geometrically
51200 while its resources increase arithmetically. Bohr's model of the atom
51300 offered an analogy between the solar system and the atom. These well-known
51400 historical examples should be sufficient here to illustrate the role
51500 of analogies in theory construction. Analogies are made in respect
51600 to those properties which constitute the positive and neutral
51700 analogy. The negative analogy is ignored. Thus Bohr's model of
51800 the atom as a miniature planetary system was not intended to suggest
51900 that electrons possessed color or that planets jumped out of their
52000 orbits.
52100
52200 Functional Equivalence
52300
52400 When human symbolic processes are the subject of a simulation
52500 model, we draw the analogy from two sources, symbolic computation and
52600 psychology. The analogy made is between systems known to have the
52700 power to process symbols, namely, persons and computers. The
52800 properties compared in the analogy are obviously not physical or
52900 substantive such as blood and wires, but functional and procedural.
53000 We want to assume that not-well-understood mental procedures in a
53100 person are similar to the more accessible and better understood
53200 procedures of symbol-processing which take place in a computer. The
53300 analogy is one of functional or procedural equivalence. (For a
53400 further account of functional analysis, see Hempel, 1965).
53500 Mousetraps are functionally equivalent. There exists a large set
53600 of physical mechanisms for catching mice. The term "mousetrap" says
53700 what each member of the set has in common. Each takes as input a live
53800 mouse and yields as output a dead one. Systems equivalent from one
53900 point of view may not be equivalent from another (Fodor, 1968).
54000 If model and human are indistinguishable at the manifest
54100 level of linguistic I-O pairs, then they can be considered equivalent
54200 at that level. If they can be shown to be indistinguishable at
54300 more internal symbolic levels, then a stronger equivalence exists. How stringent
54400 and how extensive are the demands for equivalence to be? Must
54500 there be point-to-point correspondences at every level? What is to
54600 count as a point and what are the levels? Procedures can be specified
54700 and ostensively pointed to in an algorithm, but how can we point to
54800 unobservable symbolic processes in a person's head? There is an
54900 inevitable limit to scrutinizing the "underlying" processes of the
55000 world. Einstein likened this situation to a man explaining the
55100 behavior of a watch without opening it: "He will never be able to
55200 compare his picture with the real mechanism and he cannot even
55300 imagine the possibility or meaning of such a comparison".
55400 In constructing an algorithm one puts together an
55500 organization of collaborating functions or procedures. A function
55600 takes some symbolic structure as input and yields some symbolic
55700 structure as output. Two computationally equivalent functions, having
55800 the same input and yielding the same output, can differ `inside' the
55900 function at the instruction level.
56000 Consider an elementary programming problem which students in
56100 symbolic computation are often asked to solve. Given a list L of
56200 symbols, L=(A B C D), as input, construct a function or procedure
56300 which will convert this list to the list RL in which the order of the
56400 symbols is reversed, i.e. RL=(D C B A). There are many ways of
56500 solving this problem and the code of one student may differ greatly
56600 from that of another at the level of individual instructions. But the
56700 differences of such details are irrelevant. What is significant is
56800 that the solutions make the required conversion from L to RL. The
56900 correct solutions will all be computationally equivalent at the
57000 input-output level since they take the same symbolic structures as
57100 input and produce the same symbolic output.
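
Two of the many possible solutions are sketched below (in Python, for
convenience of illustration); their individual instructions differ, yet both
convert L = (A B C D) into RL = (D C B A) and are therefore computationally
equivalent at the input-output level.

# Two different solutions to the reversal exercise: the individual
# instructions differ, but the input-output behavior is the same.

def reverse_recursive(l):
    # Append the first symbol to the reversal of the remainder.
    if not l:
        return []
    return reverse_recursive(l[1:]) + [l[0]]

def reverse_iterative(l):
    # Push each symbol onto the front of an accumulator list.
    rl = []
    for symbol in l:
        rl = [symbol] + rl
    return rl

L = ["A", "B", "C", "D"]
print(reverse_recursive(L))   # ['D', 'C', 'B', 'A']
print(reverse_iterative(L))   # ['D', 'C', 'B', 'A']
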
57200 If we propose that an algorithm we have constructed is
57300 functionally equivalent to what goes on in humans when they process
57400 symbolic structures, how can we justify this position?
57500 Indistinguishability tests at, say, the linguistic level provide
57600 evidence only for beginning equivalence. We would like to be able to
57700 have access to the underlying processes in humans the way we can with
57800 algorithms. (Admittedly, we do not directly observe processes at all
57900 levels but only the products of some). The difficulty lies in
58000 identifying, making accessible, and counting processes in human
58100 heads. Many symbol-processing experiments are now being designed
58200 and carried out. We must have great patience with this type of
58300 experimental information-processing psychology.
58400 In the meantime, besides first-approximation I-O equivalence
58500 and plausibility arguments, one might appeal to extra-evidential
58600 support offering parallelisms from neighboring scientific domains.
58700 One can offer analogies between what is known to go on at a molecular
58800 level in the cells of living organisms and what goes on in an
58900 algorithm. For example, a DNA molecule in the nucleus of a cell
59000 consists of an ordered sequence (list) of nucleotide bases (symbols)
59100 coded in triplets termed codons (words). Each codon specifies which
59200 amino acid is to be linked, during protein synthesis, into the chain
59300 of polypeptides making up the protein. The codons
59400 function like instructions in a programming language. Some codons are
59500 known to operate as terminal symbols analogous to symbols in an
59600 algorithm which mark the end of a list. If, as a result of a
59700 mutation, a nucleotide base is changed, the usual protein will not be
59800 synthesized. The resulting polypeptide chain may have lethal or
59900 trivial consequences for the organism, depending on what must be
60000 passed on to other processes which require the polypeptide to be
60100 handed over to them. Similarly in an algorithm. If a symbol or word in a
60200 procedure is incorrect, the procedure cannot operate in its intended
60300 manner. Such a result may be lethal or trivial to the algorithm
60400 depending on what information the faulty procedure must pass on at
60500 its interface with other procedures in the overall organization. Each
60600 procedure in an algorithm is embedded in an organization of
60700 collaborating procedures just as are functions in living organisms.
60800 We know that at the molecular level of living organisms there exists
60900 a process such as serial progression along a nucleotide sequence,
61000 which is analogous to stepping down a list in an algorithm. Further
61100 analogies can be made between point mutations in which DNA bases can
61200 be inserted, deleted, substituted or reordered and symbolic
61300 computation in which the same operations are commonly carried out on
61400 symbolic structures. Such analogies are interesting as
61500 extra-evidential support but obviously closer linkages are needed
61600 between the macro-level of symbolic processes and the micro-level of
61700 molecular information-processing within cells.
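
The point-mutation parallel mentioned above can be sketched concretely (the
codon-like strings and the downstream check below are invented for
illustration): the same operations of insertion, deletion, substitution and
reordering apply to a symbol list, and whether a single changed symbol is
lethal or trivial depends on what the next procedure at the interface
requires.

# Illustrative list operations analogous to point mutations: insertion,
# deletion, substitution and reordering of symbols in a sequence.

sequence = ["AUG", "GCU", "UAC", "UAA"]      # invented codon-like symbols

inserted    = sequence[:2] + ["CCG"] + sequence[2:]                 # insertion
deleted     = sequence[:1] + sequence[2:]                           # deletion
substituted = sequence[:1] + ["GCC"] + sequence[2:]                 # substitution
reordered   = [sequence[0], sequence[2], sequence[1], sequence[3]]  # reordering

def downstream_procedure(seq):
    """A stand-in for the next procedure at the interface: here it only
    requires that the sequence end with the terminal symbol 'UAA'."""
    return "usable" if seq and seq[-1] == "UAA" else "unusable"

print(downstream_procedure(substituted))    # usable: this change was trivial
print(downstream_procedure(sequence[:-1]))  # unusable: the terminal symbol is lost
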
61800 To obtain evidence for the acceptability of a model as true
61900 or authentic, empirical tests are utilized as validation procedures.
62000 Such tests should also tell us which is the best among alternative
62100 versions of a family of models and, indeed, among alternative families
62200 of models. Scientific explanations do not stand alone in isolation.
62300 They are evaluated relative to rival contenders for the position of
62400 "best available". Once we accept a theory or model as the best
62500 available, can we be sure it is correct or true? We can never know
62600 with certainty. Theories and models are provisional approximations to
62700 nature, destined to be superseded by better ones.